evolutionary method
RUMC: A Rule-based Classifier Inspired by Evolutionary Methods
As the field of data analysis grows rapidly due to the large amounts of data being generated, effective data classification has become increasingly important. This paper introduces the RUle Mutation Classifier (RUMC), which represents a significant improvement over the Rule Aggregation ClassifiER (RACER). RUMC uses innovative rule mutation techniques based on evolutionary methods to improve classification accuracy. In tests with forty datasets from OpenML and the UCI Machine Learning Repository, RUMC consistently outperformed twenty other well-known classifiers, demonstrating its ability to uncover valuable insights from complex data.

The Rule Aggregating ClassifiER (RACER) [7] is a rule-based classification algorithm that generates initial rules from training dataset records with the same mechanism. However, these rules tend to be too specific, making them less effective for classifying new data, particularly when working with small datasets that have few distinct instances. To address this challenge, I introduce the RUle Mutation Classifier (RUMC), a novel algorithm that enhances the capabilities of RACER. RUMC aims to improve the handling of various datasets, including high-dimensional and low-sample-size data.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
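The rule-mutation idea the abstract describes can be sketched in a few lines. The toy below is an illustrative assumption, not the actual RUMC algorithm: rules are taken to be attribute-value conjunctions (with None as a wildcard), and a mutation generalizes a rule by dropping one condition, kept only if training accuracy does not degrade.

```python
import random

# Toy sketch of evolutionary rule mutation for a rule-based classifier.
# A rule maps attribute -> required value; None is a wildcard.
# Illustrative only; this is not the RUMC implementation.

def covers(rule, record):
    """A rule covers a record if every non-wildcard condition matches."""
    return all(v is None or record[a] == v for a, v in rule.items())

def accuracy(rule, label, data):
    """Fraction of covered records whose class equals the rule's label."""
    covered = [y for x, y in data if covers(rule, x)]
    return covered.count(label) / len(covered) if covered else 0.0

def mutate(rule, label, data, rng=random):
    """Generalize one random condition to a wildcard; keep the mutant
    only if training accuracy does not drop."""
    conditions = [a for a, v in rule.items() if v is not None]
    if not conditions:
        return rule
    mutant = dict(rule)
    mutant[rng.choice(conditions)] = None
    if accuracy(mutant, label, data) >= accuracy(rule, label, data):
        return mutant
    return rule
```

Starting from a rule read off a single training record, repeated mutations of this kind broaden its coverage, which is exactly the over-specificity problem the abstract attributes to RACER's initial rules.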
Reviews: Evolution-Guided Policy Gradient in Reinforcement Learning
Post-discussion update: I have increased my score. In particular, they took to heart my concern about running more experiments to tease apart why the system is performing well. Obviously they did not run all the experiments I asked for, but I hope they consider doing even more if accepted. I would still like to emphasize that the paper is much more interesting if you remove the focus on SOTA results. Understanding why your system works well, and when it doesn't, is much more likely to have a long-lasting scientific impact on the field, whereas SOTA changes frequently.
Random Actions vs Random Policies: Bootstrapping Model-Based Direct Policy Search
Hanna, Elias, Coninx, Alex, Doncieux, Stéphane
This paper studies the impact of the initial data gathering method on the subsequent learning of a dynamics model. Dynamics models approximate the true transition function of a given task, in order to perform policy search directly on the model rather than on the costly real system. This study aims to determine how to bootstrap a model as efficiently as possible, by comparing initialization methods employed in two different policy search frameworks in the literature. The study focuses on model performance under the episode-based framework of evolutionary methods, using probabilistic ensembles. Experimental results show that various task-dependent factors can be detrimental to each method, suggesting that hybrid approaches are worth exploring.
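The two bootstrapping strategies in the title can be sketched side by side: i.i.d. random actions at every step versus one randomly drawn policy rolled out for a whole episode. The 1-D environment, linear policy class, and action range below are my own illustrative assumptions, not the paper's setup.

```python
import random

def rollout_random_actions(step_fn, s0, horizon, rng):
    """Collect (s, a, s') transitions using i.i.d. random actions."""
    data, s = [], s0
    for _ in range(horizon):
        a = rng.uniform(-1.0, 1.0)
        s2 = step_fn(s, a)
        data.append((s, a, s2))
        s = s2
    return data

def rollout_random_policy(step_fn, s0, horizon, rng):
    """Collect transitions from one randomly drawn linear policy."""
    w = rng.uniform(-1.0, 1.0)          # random policy parameter
    data, s = [], s0
    for _ in range(horizon):
        a = max(-1.0, min(1.0, w * s))  # deterministic given w
        s2 = step_fn(s, a)
        data.append((s, a, s2))
        s = s2
    return data

# A toy 1-D dynamics function standing in for the costly real system.
def toy_step(s, a):
    return 0.9 * s + 0.1 * a
```

Either transition set is what the dynamics model (a probabilistic ensemble in the paper) would then be fit to; random actions tend to jitter around the start state, while a single random policy commits to one behavior for the whole episode, and which yields better model coverage is task-dependent.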
Efficient Sparse Artificial Neural Networks
Naji, Seyed Majid, Abtahi, Azra, Marvasti, Farokh
The brain, as the source of inspiration for Artificial Neural Networks (ANN), is based on a sparse structure. This sparse structure helps the brain consume less energy, learn more easily, and generalize patterns better than any ANN. In this paper, two evolutionary methods for introducing sparsity into ANNs are proposed. In the proposed methods, the sparse structure of a network, as well as the values of its parameters, are trained and updated during the learning process. The simulation results show that these two methods achieve better accuracy and faster convergence while needing fewer training samples than their sparse and non-sparse counterparts. Furthermore, the proposed methods significantly improve generalization power and reduce the number of parameters. For example, sparsifying the ResNet47 network with the proposed methods for image classification on the ImageNet dataset uses 40% fewer parameters, while the top-1 accuracy of the model improves by 12% and 5% compared to the dense network and its sparse counterpart, respectively. As another example, the proposed methods for the CIFAR10 dataset converge to their final structure 7 times faster than their sparse counterpart, while the final accuracy increases by 6%.
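One common way to realize a sparse structure that "is trained and updated during the learning process" is a prune-and-regrow step between training phases. The magnitude-prune/random-regrow rule below is a standard scheme used here as an assumption; the paper's exact update may differ.

```python
import numpy as np

# Sketch of one topology-evolution step for a sparse layer: prune the
# weakest active connections, regrow the same number at random inactive
# positions, so overall sparsity stays fixed while the structure adapts.

def evolve_mask(weights, mask, prune_frac, rng):
    """Return a new 0/1 mask with the same number of active entries."""
    active = np.flatnonzero(mask)
    n_prune = int(prune_frac * active.size)
    if n_prune == 0:
        return mask.copy()
    # Indices of the weakest (smallest |w|) active connections.
    weakest = active[np.argsort(np.abs(weights.ravel()[active]))[:n_prune]]
    new_mask = mask.copy()
    np.put(new_mask, weakest, 0)
    # Regrow at randomly chosen currently-inactive positions.
    inactive = np.flatnonzero(new_mask == 0)
    grow = rng.choice(inactive, size=n_prune, replace=False)
    np.put(new_mask, grow, 1)
    return new_mask
```

Between regrowth steps, the optimizer would only update weights where the mask is 1 (e.g. multiplying gradients or weights by the mask), which is how the parameter values and the structure can be trained jointly.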
Reducing catastrophic forgetting when evolving neural networks
A key stepping stone in the development of an artificial general intelligence (a machine that can perform any task) is the production of agents that can perform multiple tasks at once instead of just one. Unfortunately, canonical methods are very prone to catastrophic forgetting (CF): the act of overwriting previous knowledge about a task when learning a new task. Recent efforts have developed techniques for overcoming CF in learning systems, but no attempt has been made to apply these new techniques to evolutionary systems. This research presents a novel technique, weight protection, for reducing CF in evolutionary systems by adapting a method from learning systems. It is used in conjunction with other evolutionary approaches for overcoming CF and is shown to be effective at alleviating CF when applied to a suite of reinforcement learning tasks. It is speculated that this work could indicate the potential for a wider application of existing learning-based approaches to evolutionary systems, and that evolutionary techniques may be competitive with or better than learning systems when it comes to reducing CF.
- Information Technology > Artificial Intelligence > Machine Learning > Evolutionary Systems (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.90)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
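A weight-protection scheme of the kind described above can be illustrated as a penalty folded into the evolutionary fitness function, in the spirit of elastic weight consolidation; the quadratic form and the per-weight importance values below are my assumptions for illustration, not the paper's exact formulation.

```python
# Weight protection as a fitness penalty: genomes that drift away from
# weights that were important for earlier tasks score lower, so
# selection itself resists catastrophic forgetting. Illustrative sketch.

def protected_fitness(weights, task_fitness, anchors, importance, lam):
    """New-task fitness minus a penalty for moving protected weights.

    weights    : candidate genome (list of floats)
    anchors    : weight values saved after the previous task
    importance : per-weight importance (higher = protect more)
    lam        : protection strength
    """
    penalty = sum(f * (w - a) ** 2
                  for w, a, f in zip(weights, anchors, importance))
    return task_fitness(weights) - lam * penalty
```

An evolutionary loop would simply rank candidates by `protected_fitness` instead of raw task fitness; with `lam = 0` it degrades to ordinary selection.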
RL -- Deep Reinforcement Learning (Learn effectively like a human)
With the brute force of GPUs and a better understanding of AI, we have beaten Go champions, and Face ID ships with every new iPhone. But in the robotics world, training a robot to peel lettuce makes the news. Even with an unfair advantage in computation speed, a computer still cannot manage tasks that we take for granted. The dilemma is that AI does not learn as effectively as humans do. We may be just a couple of papers away from another breakthrough, or we may need to make AI learn more effectively.